PCRLv2: A Unified Visual Information Preservation Framework for Self-supervised Pre-training in Medical Image Analysis

Zhou, Hong-Yu, Lu, Chixiang, Chen, Chaoqi, Yang, Sibei, Yu, Yizhou

arXiv.org Artificial Intelligence

Recent advances in self-supervised learning (SSL) in computer vision are primarily comparative: the goal is to preserve invariant and discriminative semantics in latent representations by comparing siamese image views. However, the preserved high-level semantics do not contain enough local information, which is vital in medical image analysis (e.g., image-based diagnosis and tumor segmentation). To mitigate the locality problem of comparative SSL, we propose to incorporate the task of pixel restoration for explicitly encoding more pixel-level information into high-level semantics. We also address the preservation of scale information, which aids image understanding but has drawn little attention in SSL. The resulting framework can be formulated as a multi-task optimization problem on the feature pyramid. Specifically, we conduct multi-scale pixel restoration and siamese feature comparison in the pyramid. In addition, we propose non-skip U-Net to build the feature pyramid and develop sub-crop to replace multi-crop in 3D medical imaging. The proposed unified SSL framework (PCRLv2) surpasses its self-supervised counterparts on various tasks, including brain tumor segmentation (BraTS 2018), chest pathology identification (ChestX-ray, CheXpert), pulmonary nodule detection (LUNA), and abdominal organ segmentation (LiTS), sometimes outperforming them by large margins with limited annotations.
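The multi-task objective described above, combining pixel restoration with siamese feature comparison across a feature pyramid, can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, the MSE restoration loss, the negative-cosine comparison loss, and the weighting factor `alpha` are all assumptions made for clarity.

```python
import numpy as np

def restoration_loss(pred, target):
    # Pixel-level restoration term: mean squared error between
    # the restored view and the original pixels.
    return float(np.mean((pred - target) ** 2))

def comparison_loss(z1, z2):
    # Siamese feature comparison term: negative cosine similarity
    # between the embeddings of two augmented views.
    z1 = z1 / np.linalg.norm(z1)
    z2 = z2 / np.linalg.norm(z2)
    return float(-np.dot(z1, z2))

def multi_scale_ssl_loss(restored, originals, feats1, feats2, alpha=1.0):
    # Sum both objectives over every level of the feature pyramid,
    # so each scale preserves both pixel-level and semantic information.
    total = 0.0
    for r, o, f1, f2 in zip(restored, originals, feats1, feats2):
        total += restoration_loss(r, o) + alpha * comparison_loss(f1, f2)
    return total
```

In a real pipeline the lists would hold per-scale outputs of the decoder (restorations) and per-scale embeddings of the two siamese views; here they are plain arrays to keep the sketch self-contained.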


Python 3.9 vs Python 3.10: A Feature Comparison

#artificialintelligence

The past decade has seen numerous programming languages developed and updated to make work easier in the programming domain and to build the next Artificial Intelligence (AI) or Machine Learning (ML) system. The traditionally dominant languages were Java, C#, and others, but over time Python has risen to the top of the list of favourites, largely because of the ease with which developers can handle complex coding challenges. Python is a high-level, robust programming language focused on rapid application development. Thanks to its core functionality, Python has become one of the fastest-growing programming languages and an obvious choice for programmers building applications in machine learning, AI, big data, and IoT.


What is the best simulation tool for robotics?

Robohub

What is the best simulation tool for robotics? This is a hard question to answer because many people (or their companies) specialize in one tool or another, and some simulators are better at one aspect of robotics than at others. When I'm asked to recommend the best simulation tool for robotics, I have to find an expert and hope that they are up to date across a wide range of simulation tools, which is why I took particular note of the recent review paper from Australia's CSIRO, "A Review of Physics Simulators for Robotics Applications" by Jack Collins, Shelvin Chand, Anthony Vanderkop, and David Howard, published in IEEE Access (Volume 9). "We have compiled a broad review of physics simulators for use within the major fields of robotics research. More specifically, we navigate through key sub-domains and discuss the features, benefits, applications and use-cases of the different simulators categorised by the respective research communities. Our review provides an extensive index of the leading physics simulators applicable to robotics researchers and aims to assist them in choosing the best simulator for their use case."


SCNet: Enhancing Few-Shot Semantic Segmentation by Self-Contrastive Background Prototypes

Chen, Jiacheng, Gao, Bin-Bin, Lu, Zongqing, Xue, Jing-Hao, Wang, Chengjie, Liao, Qingmin

arXiv.org Artificial Intelligence

Few-shot semantic segmentation aims to segment novel-class objects in a query image with only a few annotated examples in support images. Most advanced solutions exploit a metric learning framework that performs segmentation by matching each pixel to a learned foreground prototype. However, this framework suffers from biased classification due to the incomplete construction of sample pairs with the foreground prototype only. To address this issue, in this paper we introduce a complementary self-contrastive task into few-shot semantic segmentation. Our new model is able to associate the pixels in a region with the prototype of that region, regardless of whether they lie in the foreground or background. To this end, we generate self-contrastive background prototypes directly from the query image, with which we enable the construction of complete sample pairs and thus a complementary and auxiliary segmentation task to achieve the training of a better segmentation model. Extensive experiments on PASCAL-5$^i$ and COCO-20$^i$ clearly demonstrate the superiority of our proposal. At no expense of inference efficiency, our model achieves state-of-the-art results in both 1-shot and 5-shot settings for few-shot semantic segmentation.
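The prototype-matching idea underlying this line of work, and the foreground-plus-background matching the abstract argues for, can be sketched in a few lines. This is a simplified NumPy illustration under assumed conventions (masked average pooling for prototypes, per-pixel cosine similarity for matching); it is not the SCNet implementation, and the function names are invented for the sketch.

```python
import numpy as np

def masked_average_pooling(features, mask):
    # Prototype = mean feature vector over a masked region.
    # features: (C, H, W) feature map; mask: (H, W) binary mask.
    denom = mask.sum() + 1e-8
    return (features * mask).sum(axis=(1, 2)) / denom

def cosine_map(features, prototype):
    # Per-pixel cosine similarity between the feature map and a prototype.
    f = features / (np.linalg.norm(features, axis=0, keepdims=True) + 1e-8)
    p = prototype / (np.linalg.norm(prototype) + 1e-8)
    return np.tensordot(p, f, axes=([0], [0]))  # (H, W) similarity map

def segment(query_feats, fg_proto, bg_proto):
    # Assign each pixel to foreground or background by the higher
    # cosine similarity, instead of thresholding against the
    # foreground prototype alone.
    fg = cosine_map(query_feats, fg_proto)
    bg = cosine_map(query_feats, bg_proto)
    return (fg > bg).astype(np.uint8)
```

Matching against both prototypes is what removes the bias the abstract describes: a pixel that resembles the background no longer has to be forced into a foreground/not-foreground decision against a single prototype.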